8 research outputs found

    Visual Servoing from Deep Neural Networks

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.
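
    The control scheme described here can be illustrated with a classical pose-based visual servoing (PBVS) law. The sketch below is a minimal illustration, not the paper's implementation: estimate_relative_pose is a hypothetical stand-in for the fine-tuned CNN, and the frame convention and gain are assumptions.

    # Minimal PBVS sketch in Python/NumPy, assuming the network returns the pose
    # (R, t) of the current camera frame expressed in the desired camera frame.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def estimate_relative_pose(current_image, desired_image):
        """Hypothetical placeholder for the fine-tuned CNN."""
        raise NotImplementedError

    def pbvs_velocity(R, t, lam=0.5):
        """Classical decoupled PBVS law: v = -lam * R^T t, w = -lam * theta*u."""
        theta_u = Rotation.from_matrix(R).as_rotvec()   # axis-angle rotation error
        v = -lam * R.T @ t                              # translational velocity
        w = -lam * theta_u                              # angular velocity
        return np.concatenate([v, w])                   # 6-vector camera twist

    # Servo loop sketch (image acquisition and robot I/O omitted):
    # while not converged:
    #     R, t = estimate_relative_pose(grab_frame(), desired_image)
    #     send_camera_twist(pbvs_velocity(R, t))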

    Towards Generalized Robot Assembly through Compliance-Enabled Contact Formations

    Contact can be conceptualized as a set of constraints imposed on two bodies that are interacting with one another in some way. The nature of a contact, whether a point, line, or surface, dictates how these bodies are able to move with respect to one another given a force, and a set of contacts can provide either partial or full constraint on a body's motion. Decades of work have explored how to explicitly estimate the location of a contact and its dynamics, e.g., frictional properties, but the investigated methods have been computationally expensive and significant uncertainty often remains in the final calculation. This has held back further advances in contact-rich tasks that seem simple to humans, such as generalized peg-in-hole insertions. In this work, instead of explicitly estimating the individual contact dynamics between an object and its hole, we approach this problem by investigating compliance-enabled contact formations. More formally, contact formations are defined according to the constraints imposed on an object's available degrees of freedom. Rather than estimating individual contact positions, we abstract this calculation into an implicit representation, allowing the robot to acquire, maintain, or release constraints on the object during the insertion process by monitoring the forces exerted on the end effector through time. Using a compliant robot, our method is able to complete industry-relevant insertion tasks with tolerances below 0.25 mm without prior knowledge of the exact hole location or its orientation. We showcase our method on more generalized insertion tasks, such as commercially available non-cylindrical objects and open-world plug tasks.
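
    As a concrete illustration of monitoring end-effector forces to reason about constraints, the toy classifier below maps a measured wrench to a coarse contact-formation label for a vertical insertion. It is only a sketch under assumed thresholds and labels, not the paper's estimator.

    # Toy contact-formation labelling from a force/torque reading (illustrative only).
    import numpy as np

    FREE, SINGLE_CONTACT, CONSTRAINED = "free", "single_contact", "constrained"

    def classify_formation(wrench, f_touch=2.0, f_jam=15.0):
        """wrench: (fx, fy, fz, tx, ty, tz) in the end-effector frame.
        Force thresholds (newtons) are assumptions, not values from the paper."""
        fx, fy, fz = wrench[:3]
        lateral = np.hypot(fx, fy)          # in-plane reaction force
        axial = abs(fz)                     # force along the insertion axis
        if axial < f_touch and lateral < f_touch:
            return FREE                     # no constraint acquired yet
        if axial >= f_jam or lateral >= f_jam:
            return CONSTRAINED              # motion blocked / jammed
        return SINGLE_CONTACT               # partial constraint: keep complying

    print(classify_formation(np.array([0.5, 0.3, 6.0, 0.0, 0.0, 0.0])))  # single_contact

    A compliant controller could, for instance, switch strategies whenever the label changes, e.g. a spiral search while free and a tilt-and-insert motion once a single contact is maintained.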

    Aller plus loin avec les asservissements visuels directs (Going further with direct visual servoing)

    In this thesis we focus on visual servoing (VS) techniques, critical for many robotic vision applications, and mainly on direct VS. In order to improve the state of the art of direct methods, we tackle several components of traditional VS control laws. We first propose a method to consider histograms as a new visual servoing feature. It allows efficient control laws to be defined from any type of histogram describing the image, from intensity and color histograms to Histograms of Oriented Gradients. A novel direct visual servoing control law is then proposed, based on a particle filter that performs the optimization part of visual servoing tasks, making it possible to accomplish tasks associated with highly non-linear and non-convex cost functions. The particle filter estimate can be computed in real time through the use of image transfer techniques to evaluate camera motions associated with suitable displacements of the considered visual features in the image. Lastly, we present a novel way of modeling the visual servoing problem through deep learning and Convolutional Neural Networks, to alleviate the difficulty of modeling non-convex problems with classical analytic methods. Using image transfer techniques, we propose a method to quickly generate large training datasets in order to fine-tune existing network architectures to solve VS tasks. We show that this method can be applied both to model known static scenes and, more generally, to model relative pose estimation between pairs of viewpoints of arbitrary scenes.
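
    The "image transfer" idea used to generate training data quickly can be sketched under a planar-scene assumption: for a plane with normal n at depth d, a virtual camera displacement (R, t) induces the homography H = K (R - t n^T / d) K^-1, which can be used to warp a single reference image. The intrinsics and sampling ranges below are placeholders; the thesis's actual pipeline is not reproduced.

    # Sketch of homography-based image transfer for dataset generation (Python/OpenCV).
    import numpy as np
    import cv2

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])        # assumed camera intrinsics

    def transfer_image(img, R, t, n=np.array([0.0, 0.0, 1.0]), d=1.0):
        """Warp img as if the camera had moved by (R, t) w.r.t. the plane (n, d)."""
        H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
        return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))

    def random_small_motion(rng, max_angle=0.1, max_trans=0.05):
        """Sample a small virtual displacement (axis-angle rotation + translation)."""
        R, _ = cv2.Rodrigues(rng.uniform(-max_angle, max_angle, 3))
        t = rng.uniform(-max_trans, max_trans, 3)
        return R, t

    # Each (warped image, sampled displacement) pair is one labelled training sample.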

    Histograms-based Visual Servoing


    Particle Filter-based Direct Visual Servoing

    With respect to classical visual servoing (VS) techniques based on geometrical features, the main drawback of direct visual servoing is its limited convergence area. In this paper we propose a new direct visual servoing control law that relies on a particle filter to achieve non-local and non-linear optimization in order to increase this convergence area. Thanks to multi-view geometry and image transfer techniques, a set of particles (which correspond to potential camera velocities) is drawn and evaluated in order to determine the best camera trajectory. This new control law is validated on a 6 DOF positioning task performed on a real gantry robot, and statistical comparisons are also provided from simulation results.
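
    The predict/weight/resample structure described here can be sketched as follows. This is a minimal, illustrative skeleton, not the paper's control law: the transfer callable (for instance the planar-homography warp sketched earlier), the photometric cost, the noise scale and the resampling strategy are all assumptions.

    # Particle-filter skeleton over candidate 6-DoF camera motions (illustrative).
    import numpy as np

    def photometric_cost(img_a, img_b):
        """Sum of squared intensity differences (SSD)."""
        diff = img_a.astype(np.float64) - img_b.astype(np.float64)
        return np.sum(diff * diff)

    def pf_step(particles, current_img, desired_img, transfer, rng, sigma=0.01):
        """One predict/weight/resample step; particles has shape (N, 6)."""
        particles = particles + rng.normal(0.0, sigma, particles.shape)   # predict
        costs = np.array([photometric_cost(transfer(current_img, p), desired_img)
                          for p in particles])                            # evaluate
        w = np.exp(-(costs - costs.min()) / (costs.std() + 1e-9))         # weight
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)        # resample
        estimate = np.average(particles, axis=0, weights=w)               # weighted mean motion
        return particles[idx], estimate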

    Asservissement visuel direct basé sur des histogrammes d'intensité (Direct visual servoing based on intensity histograms)

    Classically, visual servoing has consisted in controlling the motion of a set of visual features (often geometric ones). Recently, a photometric visual servoing scheme has been proposed in order to consider the image as a whole, thereby avoiding the extraction and tracking of geometric features. Previous works have proposed to use the image intensities directly to define a control law. In this paper, we propose an extension of these works using global descriptors, here intensity histograms, computed over the whole image or over several sub-regions of it, in order to control the 6 degrees of freedom (DoF) of a robot. The results are then demonstrated through experimental validations.

    Direct visual servoing based on multiple intensity histograms

    Classically, visual servoing has considered the regulation in the image of a set of visual features (usually geometric features). Recently, direct visual servoing schemes, such as photometric visual servoing, have been introduced in order to consider every pixel of the image as a primary source of information and thus avoid the extraction and tracking of such geometric features. Previous works proposed methods to use the image intensities directly in the definition of the control law, for example by using mutual information. In this paper, we propose to extend these works by using a global descriptor, namely intensity histograms, computed on the whole image or on multiple subsets of it, in order to achieve control of a 6 degrees of freedom (DoF) robot. The results are then demonstrated through experimental validations.
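
    The feature itself is easy to sketch: per-cell intensity histograms stacked into one vector s, with the servoing error e = s - s*. The grid size and bin count below are illustrative, and the interaction matrix and control law of the paper are not reproduced.

    # Stacked intensity histograms over image sub-regions (illustrative feature only).
    import numpy as np

    def histogram_feature(img, grid=(3, 3), bins=32):
        """Stack per-cell normalized intensity histograms of a grayscale image."""
        h, w = img.shape[:2]
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                cell = img[i * h // grid[0]:(i + 1) * h // grid[0],
                           j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(cell, bins=bins, range=(0, 256), density=True)
                feats.append(hist)
        return np.concatenate(feats)

    # Servoing error between current image I and desired image I_star:
    # e = histogram_feature(I) - histogram_feature(I_star)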

    Training Deep Neural Networks for Visual Servoing

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF positioning tasks by visual servoing. A convolutional neural network is fine-tuned to estimate the relative pose between the current and desired images, and a pose-based visual servoing control law is used to reach the desired pose. The paper describes how to efficiently and automatically create the dataset used to train the network. We show that this enables the robust handling of various perturbations (occlusions and lighting variations). We then propose to train a scene-agnostic network by feeding both the desired and current images into a deep network. The method is validated on a 6 DOF robot.
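
    The scene-agnostic variant, where both the current and desired images are fed to the network, can be sketched as a small two-input regressor. This is a toy architecture for illustration only; the paper fine-tunes a pretrained backbone, which is not reproduced here.

    # Toy two-input CNN regressing a 6-DoF relative pose (PyTorch, illustrative).
    import torch
    import torch.nn as nn

    class RelativePoseNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(128, 6)             # (t, theta*u) regression

        def forward(self, current, desired):
            x = torch.cat([current, desired], dim=1)  # stack along channels
            return self.head(self.features(x).flatten(1))

    # Forward pass with dummy 3-channel images:
    net = RelativePoseNet()
    pose = net(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))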